The Future of Working with/in AI
17 October 2024


At this event we will share insights from our DCODE project and invite our research partners to share their work. We are looking forward to having you join us in Amsterdam!
 
More information: dcode-network.eu
Date & time: 15:00 - 18:00 (Registration at 14:45 // Drinks at 18:00)
Location: B. Amsterdam, Building B3, Room The Louis (Johan Huizingalaan 400, 1066 JS Amsterdam)
 
DCODE is a European-funded network and program with 15 PhD candidates. We train researchers and designers to guide society’s digital transformation towards inclusive, sustainable futures. The network’s research partners are Delft University of Technology, Umeå Institute of Design, The University of Edinburgh, University of Copenhagen, Aarhus University, Transport and Telecommunication Institute and Amsterdam University of Applied Sciences. Other partners are Amsterdam Institute for Advanced Metropolitan Solutions, MyTomorrows, Philips, Advanced Care Research Centre, Open Future, LucidMinds, ClearboxAI, BBC and Leiden University Medical Center.
 
Presentations
"Synthetic Data for Machine Learning Model Testing" by Luca Gilli (ClearboxAI)
Synthetic data is increasingly being employed to test machine learning models, providing a valuable tool for ensuring robust and ethical AI systems. This approach is particularly pertinent in the context of the AI Act, which emphasizes the need for transparency, safety, and fairness in AI applications. By generating artificial datasets that mimic real-world data, developers can rigorously test models under diverse scenarios without compromising user privacy or encountering data scarcity issues. This talk will present practical examples of how generated data can increase human oversight over AI models.
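As a loose illustration of the approach described above (not ClearboxAI's tooling; the dataset, model, and generator below are hypothetical stand-ins), a model trained on real data can be probed with synthetic records drawn from a simple generative approximation of that data:

```python
# Minimal sketch: probe a trained classifier with synthetic data that
# mimics the real training set (per-class Gaussian approximation).
# Illustrative stand-in only, not ClearboxAI's actual tooling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# "Real" data (simulated here so the example is self-contained).
X_real, y_real = make_classification(n_samples=1000, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_real, y_real)

def synthesize(X, y, n_per_class=500):
    """Sample synthetic records from a Gaussian fitted to each class,
    so the model can be stress-tested without exposing real records."""
    samples, labels = [], []
    for cls in np.unique(y):
        Xc = X[y == cls]
        mean = Xc.mean(axis=0)
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        samples.append(rng.multivariate_normal(mean, cov, size=n_per_class))
        labels.append(np.full(n_per_class, cls))
    return np.vstack(samples), np.concatenate(labels)

X_syn, y_syn = synthesize(X_real, y_real)

# Inspect model behaviour on the synthetic set, e.g. accuracy drift
# relative to performance on the real data.
print("accuracy on synthetic data:", model.score(X_syn, y_syn))
```

In practice the Gaussian generator would be replaced by a purpose-built synthetic data tool, but the testing loop stays the same: generate data that mirrors the real distribution, then observe how the model behaves on it.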
 
"AI & Ethics - but whose moral values and how?" by Bulent Ozel (LucidMinds)
AI has limits that are not simply technical but are amplified by human bias. The adoption or creation of AI, like any other technical system, is inherently driven by the values of individuals and communities. In this regard, the quest for trustworthy or ethical AI is a question of value alignment.
The AI value alignment problem is the problem of assigning moral values to machines. But whose values should machines be aligned to, and which values? While the normative aspects of value alignment deal with these questions, the technical aspect deals with how to encode values into machines.
In this talk, I will introduce an Augmented Collective Intelligence (ACI) framework where an AI design process is aligned with the community’s values.
 
"AI Act & commons" by Paul Keller (Open Future)
Paul will discuss Open Future's work on the recently enacted AI Act and on building commons-based datasets for AI training. What will be the impact of the AI Act on the design, building and research of AI systems in Europe? What design choices arise in the context of building commons-based datasets? And how do these elements contribute to the vision of AI systems that serve the public interest rather than the corporate goals of a small set of commercial actors?
"Caring in a data-driven world" by Jacob Sheahan (Advanced Care Research Centre & University of Edinburgh. Data-driven care provision has become integral to improving the quality and sustainability of services in later life, yet predictive and standardised models of care struggle to realise complicated and entangled everyday realities. How we navigate changing worlds, shaped by demographic, technological, political, and societal shifts, has become a key concern ACRC researchers, as we recognise the need to be better equipped to engage unfolding and fluid worlds with the necessary care. As we increasingly centre a need for relational ways of designing, how can we be better equipped to engage unfolding and fluid worlds with the necessary care?